Online-Within-Online Meta-Learning
Denevi, Giulia, Stamos, Dimitris, Ciliberto, Carlo, Pontil, Massimiliano
We study the problem of learning a series of tasks in a fully online Meta-Learning setting. The goal is to exploit similarities among the tasks to incrementally adapt an inner online algorithm in order to incur a low averaged cumulative error over the tasks. We focus on a family of inner algorithms based on a parametrized variant of online Mirror Descent. The inner algorithm is incrementally adapted by an online Mirror Descent meta-algorithm using the corresponding within-task minimum regularized empirical risk as the meta-loss. In order to keep the process fully online, we approximate the meta-subgradients by the online inner algorithm. An upper bound on the approximation error allows us to derive a cumulative error bound for the proposed method. Our analysis can also be converted to the statistical setting by online-to-batch arguments. We instantiate two examples of the framework in which the meta-parameter is either a common bias vector or feature map. Finally, preliminary numerical experiments confirm our theoretical findings.
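To make the scheme concrete, below is a minimal sketch of the bias-vector instantiation in the Euclidean case, where both the inner and the meta Mirror Descent updates reduce to online (sub)gradient descent. The hinge loss, step sizes, regularization weight, synthetic task generator, and the Danskin-style meta-subgradient proxy (the regularization weight times the bias minus the average inner iterate) are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of online-within-online meta-learning with a common bias
# meta-parameter (Euclidean case: Mirror Descent = online gradient descent).
# All losses, step sizes, and the meta-subgradient proxy are illustrative.
import numpy as np

def hinge_grad(w, x, y):
    """Subgradient of the hinge loss max(0, 1 - y <w, x>) with respect to w."""
    return -y * x if y * np.dot(w, x) < 1.0 else np.zeros_like(w)

def inner_omd(task, bias, inner_lr, reg):
    """Within-task online (sub)gradient descent started at the current bias.

    Returns the task's average error and an approximate meta-subgradient:
    a Danskin-style proxy reg * (bias - mean of the inner iterates), standing
    in for the gradient of the within-task minimum regularized empirical risk.
    """
    w = bias.copy()
    cum_loss, iterates = 0.0, []
    for x, y in task:
        cum_loss += max(0.0, 1.0 - y * np.dot(w, x))   # error before updating
        g = hinge_grad(w, x, y) + reg * (w - bias)     # regularize toward the bias
        iterates.append(w.copy())
        w -= inner_lr * g                              # inner OMD/OGD step
    w_bar = np.mean(iterates, axis=0)
    return cum_loss / len(task), reg * (bias - w_bar)

def meta_omd(tasks, dim, meta_lr=0.05, inner_lr=0.1, reg=1.0):
    """Outer online (sub)gradient descent on the bias, one update per task."""
    bias = np.zeros(dim)
    avg_errors = []
    for task in tasks:
        err, meta_grad = inner_omd(task, bias, inner_lr, reg)
        avg_errors.append(err)
        bias -= meta_lr * meta_grad                    # meta OMD/OGD step
    return bias, avg_errors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    common_bias = rng.normal(size=dim)

    def make_task(n_rounds=20):
        # tasks share a common bias, with small task-specific perturbations
        w_task = common_bias + 0.1 * rng.normal(size=dim)
        X = rng.normal(size=(n_rounds, dim))
        y = np.sign(X @ w_task)
        return list(zip(X, y))

    tasks = [make_task() for _ in range(50)]
    _, avg_errors = meta_omd(tasks, dim)
    print("mean error, first 10 tasks:", round(float(np.mean(avg_errors[:10])), 3))
    print("mean error, last 10 tasks: ", round(float(np.mean(avg_errors[-10:])), 3))
```

Feeding the inner iterates back into the meta-update, rather than solving any within-task problem to optimality, is what keeps the whole procedure fully online, in line with the meta-subgradient approximation described in the abstract.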
Reviews: Online-Within-Online Meta-Learning
This work proposes algorithms for the online-within-online meta-learning setting, as opposed to the more prevalent statistical setting. In this setting, tasks arrive sequentially (outer loop) and the learning within each task itself happens in an online fashion. The aim is to achieve low average regret over the tasks. The inner-loop optimization is done via Online Mirror Descent (OMD). The inner algorithm is carefully designed to provide good approximations of the (sub)gradients of the outer meta-objective.
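In symbols, the quantities the review refers to can be written as follows; this is a hedged reconstruction using our own notation (meta-parameter θ, T tasks, n rounds per task, regularization weight λ), not necessarily the paper's.

```latex
% Hedged reconstruction of the meta-loss and outer objective described above.
\[
  \mathcal{L}_t(\theta) \;=\; \min_{w}\;\frac{1}{n}\sum_{i=1}^{n}\ell_{t,i}(w)
      \;+\;\frac{\lambda}{2}\,\lVert w-\theta\rVert^{2}
  \qquad \text{(within-task minimum regularized empirical risk)}
\]
\[
  \frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{i=1}^{n}\ell_{t,i}\bigl(w_{t,i}\bigr)
  \qquad \text{(averaged cumulative error to be kept small)}
\]
```

Here the w_{t,i} are the iterates produced by the inner OMD algorithm on task t, run with the meta-parameter θ_t maintained by the outer OMD meta-algorithm.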
Reviews: Online-Within-Online Meta-Learning
This paper presents a method for online-within-online meta-learning, where tasks are revealed one after another and online learning is applied within each task. Primal-dual online learning is the main ingredient. All reviewers agree that the paper is well written and makes valuable contributions, although some closely related work is already available. During the discussion period, the reviewer with the most negative review raised their score, enabling us to reach a consensus.